enemy target
Could the Abrams live until 2030 and beyond?
The M1 Abrams destroyed Iraqi T-72 tanks in now-famous Gulf War engagements, using highly accurate, long-range thermal sights to hit targets before it could itself be seen. It patrolled the streets of Iraq in 2003, and it remains a major mechanized attack platform, combining massive firepower with an "intimidating" presence that serves as a psychological deterrent.
Military AI Would Direct Weapons To Hit Enemy Targets Within Milliseconds - Report
The US military is rapidly accelerating the development of AI to reduce soldier casualties, Fox News reported Thursday. A primary objective of the Army's emerging AI Task Force is to enable high-speed, multi-domain warfare by 2030, in which AI technology can take advantage of 'networks' to hit multiple targets with Hellfire missiles while decreasing the risk faced by soldiers in the field. Brig. Gen. Matthew Easley outlined the current difficulty of amalgamating data, explaining that today's methods rely on "stovepiped" sensor systems, each organizing its own vast stream of incoming data. AI will streamline and "fuse" these disparate sources to build a "sensor fusion" picture shared across multiple combat platforms. The Brigadier General cited multi-spectral sensors and Synthetic Aperture Radar (SAR) as examples of technologies that will benefit from AI-empowered data processing.
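The "sensor fusion" idea above, combining reports from stovepiped sensors into a single picture, can be illustrated with a toy sketch. Nothing here reflects the Army's actual systems: the (x, y, confidence) report format and the sensor labels in the comments are hypothetical, chosen only to show confidence-weighted fusion of disparate sources.

```python
def fuse_detections(detections):
    """Confidence-weighted average of per-sensor position estimates
    for a single target. Each detection is a (x, y, confidence) tuple."""
    total = sum(c for _, _, c in detections)
    if total == 0:
        raise ValueError("no confident detections to fuse")
    x = sum(x * c for x, _, c in detections) / total
    y = sum(y * c for _, y, c in detections) / total
    return x, y

# Hypothetical reports for one target from three sensor types
reports = [
    (10.0, 20.0, 0.9),   # e.g. a multi-spectral sensor
    (10.4, 19.8, 0.6),   # e.g. synthetic aperture radar
    (9.8, 20.3, 0.5),    # e.g. an airborne thermal sensor
]
print(fuse_detections(reports))
```

Real fusion pipelines would also align coordinate frames and time stamps across sensors; the weighted average only stands in for that larger problem.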
Army pursues new virtual soldier training for future war
Exploding enemy targets with precision artillery, "lasing" ground targets for drone air attacks, and waging close-combat urban warfare with hand-carried small arms are all scenarios entertained recently in a high-tech virtual training wargame designed to closely replicate anticipated future warfare. The exercise, built to virtually "create" high-threat, multi-domain modern warfare, was intended to move the Army closer to its goal of engineering a new "force-on-force" mobile training technology that prepares soldiers for the risks and perils of a new kind of war. "This was a computer-based simulation down to the individual model -- using real-time data and responding in a real-world manner," Col. Chris Cassibry, director of the Concepts Development Division at the Maneuver Capabilities Development and Integration Directorate, recently told reporters.
We must fight the invasion of the killer robots
"Killer robots" are taking over. Also known as autonomous weapons, these devices, once activated, can destroy targets without human intervention. The technology has been with us for years. In 1980, the US Navy began deploying the Phalanx Close-In Weapon System, an autonomous defense device that can spot and attack anti-ship missiles, helicopters and similar threats. In 2014, Russia announced that killer robots would guard five of its ballistic missile installations. That same year, Israel deployed the Harpy, an autonomous weapon that can stay airborne for nine hours to identify and pick off enemy targets from enormous distances.
- North America > United States (0.95)
- Europe > Russia (0.30)
- Asia > Russia (0.30)
- (3 more...)
- Government > Regional Government > North America Government > United States Government (0.72)
- Government > Military > Navy (0.57)
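The hierarchical tags attached to each article follow a simple textual convention: a ">"-separated category path followed by a confidence score in parentheses. A minimal parser for that convention, assuming exactly this layout (and nothing about how the scores were produced), might look like:

```python
import re

def parse_tag(line: str):
    """Parse a tag line such as '- Government > Military > Navy (0.57)'
    into a (path, confidence) pair."""
    m = re.match(r"-\s*(.+?)\s*\(([\d.]+)\)\s*$", line.strip())
    if m is None:
        raise ValueError(f"unrecognized tag line: {line!r}")
    path = [part.strip() for part in m.group(1).split(">")]
    return path, float(m.group(2))

path, conf = parse_tag("- Government > Military > Navy (0.57)")
print(path, conf)  # ['Government', 'Military', 'Navy'] 0.57
```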
Congruence between model and human attention reveals unique signatures of critical visual events
Current computational models of bottom-up and top-down components of attention are predictive of eye movements across a range of stimuli and of simple, fixed visual tasks (such as visual search for a target among distractors). However, to date there exists no computational framework which can reliably mimic human gaze behavior in more complex environments and tasks, such as driving a vehicle through traffic. Here, we develop a hybrid computational/behavioral framework, combining simple models for bottom-up salience and top-down relevance, and looking for changes in the predictive power of these components at different critical event times during 4.7 hours (500,000 video frames) of observers playing car racing and flight combat video games. This approach is motivated by our observation that the predictive strengths of the salience and relevance models exhibit reliable temporal signatures during critical event windows in the task sequence--for example, when the game player directly engages an enemy plane in a flight combat game, the predictive strength of the salience model increases significantly, while that of the relevance model decreases significantly. Our new framework combines these temporal signatures to implement several event detectors. Critically, we find that an event detector based on fused behavioral and stimulus information (in the form of the model's predictive strength) is much stronger than detectors based on behavioral information alone (eye position) or image information alone (model prediction maps). This approach to event detection, based on eye tracking combined with computational models applied to the visual input, may have useful applications as a less-invasive alternative to other event detection approaches based on neural signatures derived from EEG or fMRI recordings.
- North America > United States > California > Los Angeles County > Los Angeles (0.28)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
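The event-detection idea in the abstract above, watching for windows where the salience model's predictive strength rises while the relevance model's falls, can be sketched as follows. This is not the authors' implementation: the per-frame difference test and the threshold value are illustrative assumptions standing in for their fused detector.

```python
def detect_events(salience_strength, relevance_strength, threshold=0.2):
    """Flag frames where the salience model's predictive strength jumps up
    while the relevance model's drops -- the temporal signature the abstract
    associates with critical events such as engaging an enemy plane.
    Both inputs are per-frame predictive-strength series of equal length."""
    events = []
    for t in range(1, len(salience_strength)):
        d_sal = salience_strength[t] - salience_strength[t - 1]
        d_rel = relevance_strength[t] - relevance_strength[t - 1]
        if d_sal > threshold and d_rel < -threshold:
            events.append(t)
    return events

# Toy per-frame predictive strengths with one critical event at frame 2
sal = [0.3, 0.3, 0.8, 0.8, 0.4]
rel = [0.7, 0.7, 0.2, 0.2, 0.6]
print(detect_events(sal, rel))  # [2]
```

Requiring opposite-signed changes in both series is what makes this a fused detector, as opposed to thresholding either signal alone.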